The Download: the future of AlphaFold, and chatbot privacy concerns
In 2017, fresh off a PhD in theoretical chemistry, John Jumper heard rumors that Google DeepMind had moved on from game-playing AI to a secret project to predict the structures of proteins. He applied for a job. Just three years later, Jumper and CEO Demis Hassabis had led the development of an AI system called AlphaFold 2 that could predict the structures of proteins to within the width of an atom, matching lab-level accuracy while returning results in hours instead of months. Last year, Jumper and Hassabis shared a Nobel Prize in chemistry. Now that the hype has died down, what impact has AlphaFold really had? How are scientists using it?
The Download: AI's coding promises, and OpenAI's longevity push
Ask the people building generative AI what it is good for right now, what they're really fired up about, and many will tell you: coding. Everyone from established AI giants to buzzy startups is promising to take coding assistants to the next level. The upshot is that developers could essentially turn into managers, spending more time reviewing and correcting code written by a model than writing it from scratch themselves. Many of the people building generative coding assistants think they could be a fast track to artificial general intelligence, the hypothetical superhuman technology that a number of top firms claim to have in their sights. When you think of AI's contributions to science, you probably think of AlphaFold, the Google DeepMind protein-folding program that earned its creators a Nobel Prize last year.
Architects of Intelligence: The Truth About AI from the People Building It, by Martin Ford
Martin Ford is a futurist, the founder of a Silicon Valley-based software development firm, and the author of two books: the New York Times bestselling Rise of the Robots: Technology and the Threat of a Jobless Future (winner of the 2015 Financial Times/McKinsey Business Book of the Year Award and translated into more than 20 languages) and The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future. His TED Talk on the impact of AI and robotics on the economy and society, given on the main stage at the 2017 TED Conference, has been viewed more than 2 million times. Martin is also the consulting artificial intelligence expert for the new "Rise of the Robots Index" from Societe Generale, underlying the Lyxor Robotics & AI ETF, which is focused specifically on investing in companies that will be significant participants in the AI and robotics revolution. He holds a computer engineering degree from the University of Michigan, Ann Arbor, and a graduate business degree from the University of California, Los Angeles. He has written about future technology and its implications for publications including The New York Times, Fortune, Forbes, The Atlantic, The Washington Post, Harvard Business Review, The Guardian, and The Financial Times.
Book Review: Architects of Intelligence
Artificial intelligence seems to be the go-to solution to every problem there is (technological in nature or otherwise), and it's only getting worse. A staggering number of both startups and established companies are loudly proclaiming how AI, or machine learning, or deep learning, or whatever is absolutely going to make everything faster, better, cheaper, fairer, and so on. The reason this sort of breathless and inevitably shallow media-driven enthusiasm for artificial intelligence is effective is that there's just enough of a general understanding of AI for people to know that it can do some cool things, but not enough for people to question what it's actually capable of, or whether applying it to a specific problem is a good idea. This is not to say that a lack of understanding is anyone's fault, really: it's hard to define what AI even is, much less communicate how it works. And without the proper context, there's no way to make an informed judgement about the future potential of artificial intelligence.
Microsoft's Insights on AI - DZone AI
I had the opportunity to hear Nuno Costa, Senior Director, Cloud and AI, Microsoft Azure Marketplace, during Outsystems' NextStep user conference. While I was not able to speak to Nuno after his presentation on "The AI Transformation," I was able to get the following answers to my questions from Microsoft: Microsoft's goal is to make AI accessible to every organization and help augment human ingenuity through the power of intelligent technology. We do this using a thoughtful approach when designing AI systems that extend and empower human capabilities in all aspects of life.
- Leading innovation that extends your capabilities: When we add AI capabilities to products (including mainstream products like Bing Search, PowerPoint, and Skype translator), they are often rooted in discoveries from Microsoft's research labs.
- Building powerful platforms that make innovation faster and more accessible: We have created APIs and other tools that developers, customers, and data scientists can use to add intelligence to existing products and services or to build new ones.
For better AI, diversify the people building it
Big technology companies get much of the blame when technology behaves badly. But these same companies, says Partnership on AI executive director Terah Lyons, could also be part of the solution in making sure future AI technology works better for the world. Speaking at MIT Technology Review's annual EmTech Digital conference in San Francisco, Lyons presented the Partnership on AI's four-point mission statement and eight tenets that the organization calls its guiding principles. Those tenets include working to protect the privacy and security of individuals, striving to respect the interests of all parties that may be affected by AI advances, helping keep AI researchers socially responsible, ensuring that AI research and technology is robust and safe, and creating a culture of cooperation, trust, and openness among AI scientists to help achieve these goals. The Partnership on AI hopes that these principles will be adopted by the wider technology community.